
    A combination of particle filtering and deterministic approaches for multiple kernel tracking.

    Color-based tracking methods have proven effective thanks to their robustness. The drawback of such a global representation of an object is the lack of information about its spatial configuration, which makes it difficult to track more complex motions. This issue is overcome by using several kernels that weight pixel locations. In this paper, a multiple-kernel configuration is proposed and developed in both probabilistic and deterministic frameworks. The advantages of both approaches are combined to design a robust tracker able to track the location, size and orientation of the object. A visual servoing application tracking a moving object validates the proposed approach.
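
    As a rough illustration of how the probabilistic and deterministic ingredients can be combined, the sketch below runs a particle filter over 2D object positions, scores each particle with a kernel-weighted intensity histogram, and refines the estimate deterministically from the particle weights. It is a position-only toy (the paper's tracker also handles size and orientation, and uses colour histograms); every name and parameter in it is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_histogram(image, cx, cy, h, bins=8):
    """Intensity histogram weighted by an Epanechnikov kernel centred on (cx, cy)."""
    ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / h ** 2
    w = np.clip(1.0 - r2, 0.0, None)          # kernel weight, zero outside radius h
    idx = (image * bins).astype(int).clip(0, bins - 1)
    hist = np.bincount(idx.ravel(), weights=w.ravel(), minlength=bins)
    return hist / (hist.sum() + 1e-12)

def bhattacharyya(p, q):
    """Similarity between two normalised histograms."""
    return np.sum(np.sqrt(p * q))

def track_step(image, target_hist, particles, h=15.0, sigma=3.0):
    """One hybrid step: probabilistic diffusion, then deterministic refinement."""
    particles = particles + rng.normal(0.0, sigma, particles.shape)   # diffusion
    scores = np.array([bhattacharyya(kernel_histogram(image, x, y, h), target_hist)
                       for x, y in particles])
    weights = np.exp(20.0 * (scores - scores.max()))
    weights /= weights.sum()
    estimate = weights @ particles      # deterministic, mean-shift-like refinement
    keep = rng.choice(len(particles), size=len(particles), p=weights)  # resampling
    return particles[keep], estimate

# Toy demo: a bright blob on a dark background stands in for the tracked object.
image = np.zeros((120, 160))
image[50:70, 80:100] = 0.9
target_hist = kernel_histogram(image, 90.0, 60.0, 15.0)
particles = rng.uniform([70.0, 40.0], [110.0, 80.0], size=(100, 2))
for _ in range(5):
    particles, estimate = track_step(image, target_hist, particles)
print("estimated centre:", estimate)    # should be close to (90, 60)
```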

    Blur aware metric depth estimation with multi-focus plenoptic cameras

    While a traditional camera captures only one point of view of a scene, a plenoptic (or light-field) camera is able to capture spatial and angular information in a single snapshot, enabling depth estimation from a single acquisition. In this paper, we present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera. The proposed approach is especially suited to the multi-focus configuration, where several micro-lenses with different focal lengths are used. The main goal of our blur-aware depth estimation (BLADE) approach is to improve disparity estimation for defocus stereo images by integrating both correspondence and defocus cues. We thus leverage blur information where it was previously considered a drawback. We explicitly derive an inverse projection model including the defocus blur, providing depth estimates up to a scale factor. A method to calibrate the inverse model is then proposed. We thus take depth scaling into account to achieve precise and accurate metric depth estimates. Our results show that introducing defocus cues improves the depth estimation. We demonstrate the effectiveness of our framework and depth scaling calibration on relative depth estimation setups and on real-world complex 3D scenes with ground truth acquired with a 3D lidar scanner.
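
    The core idea of mixing correspondence and defocus cues can be sketched in one dimension: if the relative blur between two micro-images is known, the sharper view can be blurred to match the other before computing a correspondence cost, so defocus stops biasing the disparity search. This toy assumes Gaussian blur (so blur variances subtract) and is not the paper's actual BLADE pipeline.

```python
import numpy as np

def gaussian_blur_1d(signal, sigma):
    """Blur a 1-D signal with a sampled Gaussian kernel (stand-in for the lens PSF)."""
    if sigma <= 0:
        return signal
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(signal, k / k.sum(), mode="same")

def blur_aware_cost(ref, other, disparity, rel_sigma, window=7):
    """Correspondence cost that first equalises the defocus of the two views.

    rel_sigma is the relative blur between the two micro-images (assumed known
    from calibration): the sharper view is blurred to match before comparison.
    """
    other_eq = gaussian_blur_1d(other, rel_sigma)
    shifted = np.roll(other_eq, disparity)
    centre = len(ref) // 2
    a = ref[centre - window:centre + window]
    b = shifted[centre - window:centre + window]
    return float(np.sum((a - b) ** 2))

# Toy example: a step edge seen by two micro-lenses with different focal lengths.
scene = np.zeros(128)
scene[64:] = 1.0
true_disparity = 5
ref = gaussian_blur_1d(scene, 2.0)                              # strongly defocused view
other = gaussian_blur_1d(np.roll(scene, -true_disparity), 0.5)  # nearly in-focus view

rel_sigma = np.sqrt(2.0 ** 2 - 0.5 ** 2)   # Gaussian blur variances subtract
costs = [blur_aware_cost(ref, other, d, rel_sigma) for d in range(-10, 11)]
print("estimated disparity:", range(-10, 11)[int(np.argmin(costs))])
```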

    Direct 3D servoing using dense depth maps

    This paper proposes a novel 3D servoing approach that uses dense depth maps to achieve robotic tasks. In contrast to position-based approaches, our method requires neither the estimation of the 3D pose (direct) nor the extraction and matching of 3D features (dense): it only requires the dense depth maps provided by 3D sensors. Our approach has been validated in servoing experiments using the depth information from a low-cost RGB-D sensor. Positioning tasks are properly achieved despite the noisy measurements, even when partial occlusions or scene modifications occur.
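
    A minimal sketch of a direct depth-map servoing loop: the error is the raw difference between the current and desired depth maps, a finite-difference Jacobian plays the role of the interaction matrix, and the velocity follows the classical pseudo-inverse control law. The rendering function and the restriction to three translational degrees of freedom are simplifications invented for the example.

```python
import numpy as np

def render_depth(pose, grid=8):
    """Depth map of the plane z = 2 m seen from a camera offset by `pose`.

    A deliberately simplified stand-in for a real RGB-D measurement: only the
    three translations (tx, ty, tz) are modelled, which is enough to show the
    structure of the servoing loop.
    """
    tx, ty, tz = pose
    u = np.linspace(-0.5, 0.5, grid)
    xs, ys = np.meshgrid(u, u)
    z = 2.0 - tz                          # plane distance along the optical axis
    return z * np.sqrt(1.0 + (xs - 0.2 * tx) ** 2 + (ys - 0.2 * ty) ** 2)

def depth_jacobian(pose, eps=1e-5):
    """Finite-difference stand-in for the interaction matrix of the depth map."""
    e0 = render_depth(pose).ravel()
    J = np.zeros((e0.size, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = eps
        J[:, i] = (render_depth(pose + dp).ravel() - e0) / eps
    return J

desired = render_depth(np.zeros(3)).ravel()       # depth map at the goal pose
pose = np.array([0.3, -0.2, 0.4])                 # initial camera offset
lam = 0.5
for _ in range(30):
    error = render_depth(pose).ravel() - desired  # dense depth-map error
    velocity = -lam * np.linalg.pinv(depth_jacobian(pose)) @ error
    pose = pose + velocity                        # integrate over a unit time step
print("final pose offset:", pose)                 # should approach zero
```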

    Leveraging blur information for plenoptic camera calibration

    This paper presents a novel calibration algorithm for plenoptic cameras, especially in the multi-focus configuration where several types of micro-lenses are used, relying on raw images only. Current calibration methods rely on simplified projection models, use features from reconstructed images, or require separate calibrations for each type of micro-lens. In the multi-focus configuration, the same part of a scene will exhibit different amounts of blur according to the micro-lens focal length. Usually, only the micro-images with the smallest amount of blur are used. In order to exploit all available data, we propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a pre-calibration step that retrieves initial camera parameters; second, it is used to express a new cost function to be minimized in our single optimization process; third, it is exploited to calibrate the relative blur between micro-images, linking the geometric blur, i.e., the blur circle, to the physical blur, i.e., the point spread function. Finally, we use the resulting blur profile to characterize the camera's depth of field. Quantitative evaluations in a controlled environment on real-world data demonstrate the effectiveness of our calibration.
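
    The relative-blur calibration step can be illustrated with a small synthetic fit: a thin-lens blur model is fitted to observed blur-circle radii by nonlinear least squares. The model, the lumped gain k and all values are made up for the sketch; the paper's cost function additionally stacks reprojection residuals from the BAP features.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Synthetic blur-circle observations for scene points at known depths; in the
# paper these radii come from BAP features extracted in the raw micro-images.
depths = rng.uniform(0.5, 2.0, 40)          # point depths (m)
true_k, true_d0 = 2e-4, 1.0                 # lumped lens gain, focus distance (m)

def blur_radius(depth, k, d0):
    """Thin-lens blur circle radius for a point at `depth` when focused at d0."""
    return k * np.abs(1.0 / d0 - 1.0 / depth)

observed = blur_radius(depths, true_k, true_d0) + rng.normal(0, 2e-6, depths.size)

def residuals(params):
    # In the full method this vector also stacks reprojection residuals; only
    # the blur term of the cost function is shown here.
    k, d0 = params
    return blur_radius(depths, k, d0) - observed

fit = least_squares(residuals, x0=[1e-4, 1.5])
print("recovered (k, d0):", fit.x)          # should approach (2e-4, 1.0)
```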

    Vehicle pose detection for make and model recognition.

    We present a new method for detecting the pose of a vehicle in an image in order to recognize its make and model. Our approach relies on matching the vehicle in the image against rigid 3D models. Using a detector based on convolutional neural networks (CNNs), keypoints corresponding to predefined parts of the vehicle are extracted from the image. These points are then filtered and matched with the points of the 3D models. Our method improves pose detection performance and is better suited to vehicle make and model recognition than approaches based on a deformable model.
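
    The matching of detected 2D parts against a rigid 3D model amounts to a robust PnP problem, which can be sketched with OpenCV. The model points, intrinsics and the stub detector below are all hypothetical; the paper's CNN detector and filtering step are replaced by synthetic detections.

```python
import numpy as np
import cv2

# Hypothetical 3D keypoints of a rigid car model (in the model frame, metres).
MODEL_POINTS = np.array([
    [-0.8, 0.4, 0.5], [0.8, 0.4, 0.5],   # headlights
    [-0.9, 1.0, 1.4], [0.9, 1.0, 1.4],   # wing mirrors
    [-0.7, 0.6, 3.8], [0.7, 0.6, 3.8],   # tail lights
])
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

def detect_parts():
    """Stand-in for the CNN part detector: returns noisy 2D part locations.

    Here detections are synthesised by projecting the model from a hidden pose;
    in the real system they come from the network applied to the image.
    """
    rvec = np.array([0.1, 0.6, 0.0])
    tvec = np.array([0.2, 0.1, 8.0])
    pts, _ = cv2.projectPoints(MODEL_POINTS, rvec, tvec, K, None)
    noise = np.random.default_rng(3).normal(0.0, 0.5, (len(MODEL_POINTS), 2))
    return pts.reshape(-1, 2) + noise

keypoints = detect_parts()
# The filtered keypoints are matched to the 3D model points and the pose is
# recovered by robust PnP, which also rejects outlier detections.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(MODEL_POINTS, keypoints, K, None)
print("rotation (Rodrigues):", rvec.ravel(), "translation:", tvec.ravel())
```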

    Stage-Wise Learning of Reaching Using Little Prior Knowledge

    In some robotic manipulation environments, hand-programming a robot behavior is often intractable because it is difficult to precisely model the dynamics and to compute features that describe the variety of scene appearances well. Deep reinforcement learning methods partially alleviate this problem in that they can dispense with hand-crafted features for the state representation and do not need pre-computed dynamics. However, they often use prior information in the task definition in the form of shaping rewards, which guide the robot toward goal state areas but require engineering or human supervision and can lead to sub-optimal behavior. In this work we consider a complex robot reaching task with a large range of initial object positions and initial arm positions, and propose a new learning approach with minimal supervision. Inspired by developmental robotics, our method consists of a weakly supervised stage-wise procedure of three tasks. First, the robot learns to fixate the object with a two-camera system. Second, it learns hand-eye coordination by learning to fixate its end-effector. Third, using the knowledge acquired in the previous steps, it learns to reach the object at different positions and from a large set of initial robot joint angles. Experiments in a simulated environment show that our stage-wise framework yields reaching performance similar to a supervised setting, without using kinematic models, hand-crafted features, calibration parameters or supervised visual modules.
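
    The stage-wise structure can be summarised in a short skeleton in which each stage starts from the policy produced by the previous one. The hill-climbing routine stands in for the deep RL algorithm and the stage rewards are invented; only the curriculum structure mirrors the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def train_stage(policy, stage_reward, steps=300, lr=0.1):
    """Hill-climbing stand-in for the deep RL algorithm used at each stage."""
    best = stage_reward(policy)
    for _ in range(steps):
        candidate = policy + lr * rng.normal(size=policy.shape)
        reward = stage_reward(candidate)
        if reward > best:
            policy, best = candidate, reward
    return policy, best

# Invented stage rewards mirroring the curriculum: object fixation trains the
# first half of the parameters, end-effector fixation the second half, and the
# reaching stage reuses both.
target = np.random.default_rng(7).normal(size=4)
stages = [
    ("object fixation", lambda p: -np.sum((p[:2] - target[:2]) ** 2)),
    ("hand-eye coordination", lambda p: -np.sum((p[2:] - target[2:]) ** 2)),
    ("reaching", lambda p: -np.sum((p - target) ** 2)),
]

policy = np.zeros(4)   # a single parameter vector shared across stages
for name, reward in stages:
    policy, score = train_stage(policy, reward)
    print(f"{name}: reward {score:.5f}")
```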

    Deterministic and Bayesian approaches for robust tracking: application to the visual servoing of a UAV

    For a robot to autonomously localise or position itself with respect to its environment, a key requirement is the perception of this environment. In this respect, the visual information provided by a camera is a particularly rich source of information, commonly used in robotics. Our work deals with the use of visual information in the context of UAV control. More specifically, two tasks have been considered: first, a tracking task, in which a UAV has to follow a moving object (a car) in an unknown environment; second, a positioning and navigation task for a UAV flying in a structured, GPS-deprived environment (inside a building). In both cases, we propose complete approaches, covering both the robust extraction of appropriate visual information and the visual servoing of the UAV. Experiments performed on a small quadrotor UAV show the validity of our approaches. Keywords: computer vision, visual servoing, UAV control.

    Automatic calibration of an acquisition system combining cameras, an inertial measurement unit and a 3D lidar

    This article presents a fully automated calibration method suitable for complex acquisition systems comprising one or more cameras, an inertial measurement unit and a 3D lidar. The principle consists in estimating the intrinsic and extrinsic parameters by matching features detected in the camera images with the 3D point cloud provided by the rangefinder. This work proposes a mathematical formalization that unifies the three types of sensors within the same likelihood function, a strategy for the fast evaluation of constraints between image data and range data, and a minimization algorithm with four families of parameters that enables the simultaneous estimation of all calibration parameters. Experiments conducted on synthetic acquisition systems and real sequences assess the area of convergence of the proposed approach and its performance in terms of accuracy, correctness and robustness in the presence of noise.
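
    The simultaneous estimation over several parameter families can be sketched as a single nonlinear least-squares problem in which intrinsic and extrinsic parameters are stacked into one vector. The sketch below is deliberately reduced (one camera, no rotation, no IMU terms); all values are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Synthetic 3D points in the lidar frame and their noisy detections in the image.
pts_lidar = rng.uniform([-1.0, -1.0, 3.0], [1.0, 1.0, 6.0], size=(60, 3))
true_f, true_t = 700.0, np.array([0.1, -0.05, 0.2])   # focal, lidar-to-camera offset

def project(pts, f, t):
    """Pinhole projection of lidar points shifted into the camera frame.

    Rotation (and the IMU) are omitted to keep the sketch short; a full version
    would add a rotation parameterisation and the IMU extrinsics as further
    parameter families in the same likelihood.
    """
    p = pts + t
    return f * p[:, :2] / p[:, 2:3]

observed = project(pts_lidar, true_f, true_t) + rng.normal(0.0, 0.5, (60, 2))

def residuals(params):
    f, t = params[0], params[1:]   # two parameter families: intrinsic, extrinsic
    return (project(pts_lidar, f, t) - observed).ravel()

fit = least_squares(residuals, x0=np.array([600.0, 0.0, 0.0, 0.0]))
print("estimated focal length:", fit.x[0])
print("estimated lidar-to-camera offset:", fit.x[1:])
```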

    3D model-based tracking for UAV indoor localisation

    This paper proposes a novel model-based tracking approach for 3D localisation. One main difficulty of standard model-based approaches lies in the presence of low-level ambiguities between different edges. In this work, given a 3D model of the edges of the environment, we derive a multiple-hypothesis tracker that retrieves the potential poses of the camera from the observations in the image. We also show how these candidate poses can be integrated into a particle filtering framework to guide the particle set toward the peaks of the distribution. Motivated by the UAV indoor localisation problem, where the GPS signal is not available, we validate the algorithm on real image sequences from UAV flights.
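
    How candidate poses from a multiple-hypothesis tracker can guide a particle filter is sketched below: at each step a fraction of the particles is redrawn around the candidate poses, pulling the set toward the peaks of the posterior. The 3-DoF pose, the likelihood stand-in and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(6)

def likelihood(pose, true_pose):
    """Stand-in for the image likelihood of a pose (edge reprojection score)."""
    return np.exp(-np.sum((pose - true_pose) ** 2) / 0.1)

def mht_particle_step(particles, candidates, true_pose, mix=0.3, sigma=0.05):
    """Particle filter step whose proposal mixes in multiple-hypothesis poses.

    `candidates` are the poses returned by the deterministic multi-hypothesis
    edge tracker; a fraction `mix` of the particles is redrawn around them so
    the set is guided toward the peaks of the distribution.
    """
    n = len(particles)
    n_cand = int(mix * n)
    seeds = candidates[rng.integers(0, len(candidates), n_cand)]
    particles = np.vstack([particles[: n - n_cand], seeds])
    particles = particles + rng.normal(0.0, sigma, particles.shape)  # diffusion
    w = np.array([likelihood(p, true_pose) for p in particles])
    w /= w.sum()
    keep = rng.choice(n, size=n, p=w)                                # resampling
    return particles[keep], w @ particles

# Toy 3-DoF pose (x, y, yaw); one candidate is near the true pose, the other is
# an ambiguous wrong hypothesis, mimicking low-level edge ambiguities.
true_pose = np.array([1.0, 2.0, 0.3])
candidates = np.array([[1.05, 1.95, 0.28], [0.2, 0.4, 1.1]])
particles = rng.normal(0.0, 1.0, (200, 3))
for _ in range(5):
    particles, estimate = mht_particle_step(particles, candidates, true_pose)
print("pose estimate:", estimate)       # should approach (1.0, 2.0, 0.3)
```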